Fast rates by transferring from auxiliary hypotheses

Authors

Abstract


Related articles

AccGenSVM: Selectively Transferring from Previous Hypotheses

In our research, we consider transfer learning scenarios where a target learner has no access to the source data, but only to hypotheses or models induced from it. This is known as the Hypothesis Transfer Learning (HTL) problem. Previous approaches concentrated on transferring source hypotheses as a whole. We introduce a novel method for selectively transferring elements from previous h...


Transferring auxiliary knowledge to enhance heterogeneous web service clustering

The growing number of web services raises the bar for finding desired web services, and clustering web services can greatly enhance service discovery. Most existing clustering approaches are designed to handle long text documents. However, the descriptions of most services are in the form of short text, which impairs the quality of clustering owing to the lack of ...


Bayesian Confirmation and Auxiliary Hypotheses Revisited: A Reply to Strevens

Michael Strevens [2001] has proposed an interesting and novel Bayesian analysis of the Quine-Duhem (Q–D) problem (i.e., the problem of auxiliary hypotheses). Strevens’s analysis involves the use of a simplifying idealization concerning the original Q–D problem. We will show that this idealization is far stronger than it might appear. Indeed, we argue that Strevens’s idealization oversimplifies ...


Learning by Transferring from Unsupervised Universal Sources

Category classifiers trained from a large corpus of annotated data are widely accepted as the sources for (hypothesis) transfer learning. Sources generated in this way are tied to a particular set of categories, limiting their transferability across a wide spectrum of target categories. In this paper, we address this largely overlooked yet fundamental source problem by both introducing a systema...


From Stochastic Mixability to Fast Rates

Empirical risk minimization (ERM) is a fundamental learning rule for statistical learning problems where the data is generated according to some unknown distribution P and returns a hypothesis f chosen from a fixed class F with small loss ℓ. In the parametric setting, depending upon (ℓ, F, P), ERM can have slow (1/√n) or fast (1/n) rates of convergence of the excess risk as a function of the sa...
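The ERM rule described above (pick the hypothesis in F with the smallest average loss on the sample) can be illustrated with a minimal sketch. The hypothesis class here is a toy choice (threshold classifiers on the real line with 0-1 loss), and all names are hypothetical, not from the paper:

```python
# Sketch of empirical risk minimization (ERM) with 0-1 loss, assuming a
# toy hypothesis class F of threshold classifiers: f_t(x) = 1 if x >= t else 0.
# ERM returns the hypothesis in F minimizing the empirical (sample-average) risk.

def erm_threshold(xs, ys, thresholds):
    """Return the threshold in `thresholds` with the smallest empirical 0-1 loss."""
    def empirical_risk(t):
        preds = [1 if x >= t else 0 for x in xs]
        return sum(p != y for p, y in zip(preds, ys)) / len(ys)
    return min(thresholds, key=empirical_risk)

# Usage: data separable at x = 0.5, so ERM selects a zero-error threshold.
xs = [0.1, 0.2, 0.4, 0.6, 0.8, 0.9]
ys = [0, 0, 0, 1, 1, 1]
t_hat = erm_threshold(xs, ys, [i / 10 for i in range(11)])  # -> 0.5
```

The slow versus fast rates the abstract refers to concern how quickly the excess risk of such an ERM hypothesis shrinks as the sample size n grows, which depends jointly on the loss, the class, and the distribution.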



Journal

Journal title: Machine Learning

Year: 2016

ISSN: 0885-6125,1573-0565

DOI: 10.1007/s10994-016-5594-4